AI Settings

The AI settings page allows admins to configure system-wide AI / machine learning options.

AI Services

Natural Language Query

Enable Natural Language Query to allow users to ask plain-English questions via the 'Ask a question' feature in the application.

This feature is available only with Enterprise licensing.

OpenAI ChatGPT

Select Enable Generative AI LLMs to turn on the integrated OpenAI ChatGPT interface, which lets users generate images, scripts, and more via ChatGPT.

API Provider

From the API Provider drop-down, choose either OpenAI or Azure OpenAI.

OpenAI

If you choose OpenAI as your provider, you'll need to provide your API key:

  • OpenAI API Key: enter your API key, which can be found in the Account Settings of your OpenAI account.
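
A quick way to confirm the key is valid before saving it is to call the OpenAI API directly. This is a minimal sketch only, not part of the product, and it assumes the official openai Python package (v1.x) is installed:

    # Minimal sketch: verify an OpenAI API key before entering it on the AI settings page.
    # Assumes the official `openai` Python package (v1.x) is installed.
    from openai import OpenAI

    client = OpenAI(api_key="sk-...")  # the key from your OpenAI Account Settings

    # Listing the available models is a cheap call that confirms the key works.
    models = client.models.list()
    print([m.id for m in models.data][:5])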

Azure OpenAI

If you choose Azure OpenAI as your provider, you'll need to provide the following:

  • OpenAI API Key: enter your API key, which can be found in your Azure portal on the Keys and Endpoint page of the Resource Management section.
  • Azure Endpoint: enter your Azure Endpoint URL, which can be found in your Azure portal on the same Keys and Endpoint page.
  • Azure Deployment: enter the name of the relevant Azure Deployment Environment.
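
Together, these three values are what an Azure OpenAI client needs (plus an API version string). As a rough illustration only, assuming the official openai Python package (v1.x), with placeholder values throughout:

    # Minimal sketch: how the three Azure settings map onto an Azure OpenAI client.
    # The key, endpoint, deployment name, and API version below are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        api_key="<your-azure-openai-key>",                      # OpenAI API Key setting
        azure_endpoint="https://my-resource.openai.azure.com",  # Azure Endpoint setting
        api_version="2024-02-01",
    )

    # With Azure OpenAI, the "model" argument is the deployment name,
    # i.e. the Azure Deployment setting.
    response = client.chat.completions.create(
        model="my-gpt-deployment",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)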

Scripting Settings

Default Environment

Select the Python and R environments that will act as the defaults when users create scripts in the system.

Scaling Mode

Select how the cluster will allocate resources to each scripting environment. The selected mode changes how the scripting environment editor operates.

  • Manual: the admin manually sets which AI servers will host which environments.
  • Automatic - Percentage: the admin assigns a percentage "coverage" that each environment should have across all the AI servers in the cluster, and the engine assigns resources automatically to match.

Typically, if the cluster is built manually, with additional nodes added on an irregular basis, the manual mode is preferred, because it gives admins fine-grained control over how Python and R resources are allocated. If the cluster shrinks and grows automatically (with a Kubernetes deployment, for example), the manual approach is infeasible and the automatic approach must be used instead.
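
To make the percentage mode concrete, here is a hypothetical sketch of the underlying arithmetic; the engine's actual assignment logic is not documented here, and the environment names and numbers are invented for illustration:

    # Hypothetical sketch of "Automatic - Percentage" scaling: given a coverage
    # percentage per environment, work out how many AI servers should host it.
    import math

    ai_servers = 8  # current number of AI servers in the cluster

    coverage = {
        "python_3.11": 75,  # host on ~75% of servers
        "r_4.3": 50,        # host on ~50% of servers
    }

    for env, pct in coverage.items():
        hosts = math.ceil(ai_servers * pct / 100)
        print(f"{env}: hosted on {hosts} of {ai_servers} servers")

    # If the cluster later scales to 12 servers, the same rule re-allocates
    # hosts automatically, with no manual reassignment needed.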